521 research outputs found

    Reading is Hard just because Listening is Easy

    In their work on reading and the difficulties that attend it, investigators have commonly omitted to ask the questions that are, in my view, prior to all others: why is it easier to perceive speech than to read, and why is it easier to speak a word than to spell it? My aim is to repair these omissions. To that end, I divide my talk into two parts. First, I say why we should consider that the greater ease of perceiving and producing speech is paradoxical, by which I mean to suggest that the reasons are not to be found among surface appearances. Then I propose how, by going beneath the surface, we can find the reasons, and so resolve the paradox. THE PARADOX Before developing the paradox, I should first remind you that perceiving and producing speech are easier than reading or writing, for this is the point from which I depart and to which I will, at the end, return. The relevant facts include the following. (1) All communities of human beings have a fully developed spoken language; in contrast, only a minority of these languages has a written form, and not all of these are in common use. (2) Speech is first in the history of the race, as it is in the child; reading

    How does cognitive load influence speech perception? : An encoding hypothesis

    Two experiments investigated the conditions under which cognitive load exerts an effect on speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture, revisits claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory
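The abstract's core tenet — word-level hypotheses reweighting the probabilities that link a speaker's sounds to native phonemes — can be sketched as a simple count-based estimator. The class name, the Laplace-style update, and the fixed phoneme inventory below are illustrative assumptions, not the paper's implementation:

```python
class AccentAdapter:
    """Toy sketch: word hypotheses update sound-to-phoneme probabilities."""

    def __init__(self, phoneme_inventory):
        self.inventory = list(phoneme_inventory)
        # counts[sound][phoneme]: evidence that this accented sound
        # realizes this native phoneme; Laplace-initialized to 1.
        self.counts = {}

    def _row(self, sound):
        if sound not in self.counts:
            self.counts[sound] = {p: 1.0 for p in self.inventory}
        return self.counts[sound]

    def update(self, heard_sounds, hypothesized_phonemes):
        # A confident hypothesis about the current word aligns each heard
        # sound with the phoneme the word is known to contain.
        for s, p in zip(heard_sounds, hypothesized_phonemes):
            self._row(s)[p] += 1.0

    def prob(self, sound, phoneme):
        # P(phoneme | heard sound), normalized over the inventory.
        row = self._row(sound)
        return row[phoneme] / sum(row.values())
```

For example, if a speaker consistently realizes /th/ as [t], repeated word hypotheses containing /th/ raise `prob("t", "th")`, which on average should help recognize later words — mirroring the model's claimed benefit.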

    No Language-Specific Activation during Linguistic Processing of Observed Actions

    It has been suggested that cortical neural systems for language evolved from motor cortical systems, in particular from those fronto-parietal systems responding also to action observation. While previous studies have shown shared cortical systems for action (or action observation) and language, they did not address the question of whether linguistic processing of visual stimuli occurs only within a subset of fronto-parietal areas responding to action observation. If this is true, the hypothesis that language evolved from fronto-parietal systems matching action execution and action observation would be strongly reinforced. We used functional magnetic resonance imaging (fMRI) while subjects watched video stimuli of hand-object interactions and control photo stimuli of the objects, and performed linguistic (conceptual and phonological) and perceptual tasks. Since stimuli were identical for linguistic and perceptual tasks, differential activations had to be related to task demands. The results revealed that the linguistic tasks activated left inferior frontal areas that were subsets of a large bilateral fronto-parietal network activated during action perception. Not a single cortical area demonstrated exclusive (or even simply higher) activation for the linguistic tasks compared to the action perception task. These results show that linguistic tasks not only share common neural representations but essentially activate a subset of the action observation network if identical stimuli are used. Our findings strongly support the evolutionary hypothesis that fronto-parietal systems matching action execution and observation were co-opted for language, a process known as exaptation

    A Bayesian explanation of the 'Uncanny Valley' effect and related psychological phenomena

    There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well known example is the ‘uncanny valley’ effect whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model that is capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions
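The mechanism described — differential perceptual distortion near a category boundary under a Bayesian model of categorical perception — can be illustrated with a minimal two-category sketch. All parameter values (category means, shared variance, equal priors) are illustrative assumptions, not the paper's fitted model:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian likelihood density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def percept(x, mu_robot=0.0, mu_human=1.0, sigma=0.25):
    """Posterior-weighted percept for a stimulus x on a 'human-likeness'
    axis, under two competing categories with equal priors."""
    lr, lh = gauss(x, mu_robot, sigma), gauss(x, mu_human, sigma)
    p_h = lh / (lr + lh)  # posterior P(human | x)
    # The percept is drawn toward the mean of the more probable category.
    return (1 - p_h) * mu_robot + p_h * mu_human

def tension(x):
    # Perceptual distortion: distance between stimulus and percept.
    # It vanishes at clear category members and at the exact boundary,
    # and peaks for near-boundary, conflicting stimuli.
    return abs(percept(x) - x)
```

A stimulus at x = 0.4 (a near-human artifact) yields far more distortion than one at x = 0.1 (clearly robot-like), giving the valley-shaped tension profile the abstract describes.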

    Are bisphosphonates effective in the treatment of osteoarthritis pain? A meta-analysis and systematic review.

    Osteoarthritis (OA) is the most common form of arthritis worldwide. Pain and reduced function are the main symptoms in this prevalent disease. There are currently no treatments for OA that modify disease progression; therefore analgesic drugs and joint replacement for larger joints are the standard of care. In light of several recent studies reporting the use of bisphosphonates for OA treatment, our work aimed to evaluate published literature to assess the effectiveness of bisphosphonates in OA treatment

    The Use of Phonetic Motor Invariants Can Improve Automatic Phoneme Discrimination

    We investigate the use of phonetic motor invariants (MIs), that is, recurring kinematic patterns of the human phonetic articulators, to improve automatic phoneme discrimination. Using a multi-subject database of synchronized speech and lips/tongue trajectories, we first identify MIs commonly associated with bilabial and dental consonants, and use them to simultaneously segment speech and motor signals. We then build a simple neural network-based regression schema (called Audio-Motor Map, AMM) mapping audio features of these segments to the corresponding MIs.
Extensive experimental results show that (a) a small set of features extracted from the MIs, as originally gathered from articulatory sensors, are dramatically more effective than a large, state-of-the-art set of audio features, in automatically discriminating bilabials from dentals; (b) the same features, extracted from AMM-reconstructed MIs, are as effective as or better than the audio features, when testing across speakers and coarticulating phonemes; and dramatically better as noise is added to the speech signal. These results seem to support some of the claims of the motor theory of speech perception and add experimental evidence of the actual usefulness of MIs in the more general framework of automated speech recognition
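The Audio-Motor Map described above is a regression from audio features to motor-invariant features. A minimal sketch of that idea follows, using a ridge-regularized linear map on synthetic data as a stand-in for the paper's neural network; the dimensions, the linear generative link, and the noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 'audio features' (e.g. cepstral-like
# coefficients) and 'motor invariant' features (e.g. lip/tongue
# kinematics). The linear relation below is an assumption for the demo.
n, d_audio, d_mi = 200, 12, 3
A_true = rng.normal(size=(d_audio, d_mi))
X_audio = rng.normal(size=(n, d_audio))
Y_mi = X_audio @ A_true + 0.05 * rng.normal(size=(n, d_mi))

# Audio-Motor Map: a ridge-regularized linear regression standing in for
# the paper's neural network, trained to predict MIs from audio features.
lam = 1e-2
W = np.linalg.solve(X_audio.T @ X_audio + lam * np.eye(d_audio),
                    X_audio.T @ Y_mi)

Y_hat = X_audio @ W          # reconstructed MIs
err = float(np.sqrt(np.mean((Y_hat - Y_mi) ** 2)))
```

The reconstructed `Y_hat` plays the role of the AMM-recovered MIs that the experiments then feed into the bilabial-vs-dental discriminator.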

    A parsimonious oscillatory model of handwriting

    We propose an oscillatory model that is theoretically parsimonious, empirically efficient and biologically plausible. Building on Hollerbach’s (Biol Cybern 39:139–156, 1981) model, our Parsimonious Oscillatory Model of Handwriting (POMH) overcomes the latter’s main shortcomings by making it possible to extract its parameters from the trace itself and by reinstating symmetry between the x and y coordinates. The benefit is a capacity to autonomously generate a smooth continuous trace that reproduces the dynamics of the handwriting movements through an extremely sparse model, whose efficiency matches that of other, more computationally expensive optimizing methods. Moreover, the model applies to 2D trajectories, irrespective of their shape, size, orientation and length. It is also independent of the end-effectors mobilized and of the writing direction
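The oscillatory idea underlying both Hollerbach's model and POMH — each coordinate driven by a sinusoid, with a horizontal drift producing left-to-right progression — can be sketched as follows. The function name and all parameter values are illustrative, not POMH's fitted parameters:

```python
import math

def oscillatory_trace(n=500, dt=0.01, omega=2 * math.pi * 4.0,
                      ax=1.0, ay=0.7, phase=math.pi / 2, drift=2.0):
    """Generate a Hollerbach-style cursive-like 2D trace:
    x(t) = ax*sin(omega*t) + drift*t   (oscillation + rightward drift)
    y(t) = ay*sin(omega*t + phase)     (oscillation, phase-shifted)."""
    xs, ys = [], []
    for i in range(n):
        t = i * dt
        xs.append(ax * math.sin(omega * t) + drift * t)
        ys.append(ay * math.sin(omega * t + phase))
    return xs, ys
```

With a quarter-cycle phase offset the two sinusoids trace looping, letter-like arcs; POMH's contribution, per the abstract, is recovering such parameters directly from a recorded trace rather than assuming them.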